Parameter Sharing Exploration and Hetero-Center Triplet Loss for Visible-Thermal Person Re-Identification

Authors

Abstract

This paper focuses on the visible-thermal cross-modality person re-identification (VT Re-ID) task, whose goal is to match person images between the daytime visible modality and the nighttime thermal modality. A two-stream network is usually adopted to address the cross-modality discrepancy, the most challenging problem in VT Re-ID, by learning multi-modality person features. In this paper, we explore how many parameters a two-stream network should share, a question still not well investigated in the existing literature. By splitting the ResNet50 model to construct a modality-specific feature-extracting network and a modality-sharing feature-embedding network, we experimentally demonstrate the effect of parameter sharing on the two-stream network for VT Re-ID. Moreover, in the framework of part-level person feature learning, we propose a hetero-center based triplet loss that relaxes the strict constraint of the traditional triplet loss by replacing the comparison of an anchor to all other samples with a comparison of the anchor center to all other centers. With these extremely simple means, the proposed method can significantly improve VT Re-ID performance. Experimental results on two datasets show that our method distinctly outperforms state-of-the-art methods by large margins, especially on the RegDB dataset, where it achieves rank-1/mAP/mINP of 91.05%/83.28%/68.84%. It can serve as a new baseline for VT Re-ID with a simple but effective strategy.
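
The hetero-center based triplet loss described above replaces sample-to-sample comparisons with comparisons between per-identity, per-modality feature centers. Below is a minimal sketch of that idea in PyTorch; the function name, the margin value, and the hardest-negative mining detail are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def hetero_center_triplet_loss(feats_v, feats_t, labels, margin=0.3):
    """Sketch of a hetero-center based triplet loss (illustrative).

    feats_v: (N, D) visible-modality features
    feats_t: (N, D) thermal-modality features, same label layout as feats_v
    labels:  (N,) identity labels
    """
    classes = labels.unique()
    # Per-identity feature centers in each modality.
    centers_v = torch.stack([feats_v[labels == c].mean(0) for c in classes])
    centers_t = torch.stack([feats_t[labels == c].mean(0) for c in classes])

    centers = torch.cat([centers_v, centers_t], dim=0)   # (2K, D)
    center_labels = torch.cat([classes, classes])        # (2K,)
    dist = torch.cdist(centers, centers)                 # pairwise L2 distances

    K = classes.numel()
    losses = []
    for i in range(2 * K):
        # Positive: the same identity's center in the other modality.
        pos = dist[i, (i + K) % (2 * K)]
        # Negative: the closest center belonging to a different identity.
        neg = dist[i][center_labels != center_labels[i]].min()
        losses.append(F.relu(margin + pos - neg))
    return torch.stack(losses).mean()
```

The parameter-sharing question can likewise be sketched as splitting ResNet50 into modality-specific shallow stages and shared deep stages; `split_at` below is a hypothetical knob for how many stages stay modality-specific.

```python
import torch.nn as nn
from torchvision.models import resnet50

class TwoStreamNet(nn.Module):
    """Sketch: modality-specific shallow stages, modality-shared deep stages."""
    def __init__(self, split_at=1):
        super().__init__()
        def stages(m):
            # ResNet50 as five sequential stages: stem + layer1..layer4.
            return [nn.Sequential(m.conv1, m.bn1, m.relu, m.maxpool),
                    m.layer1, m.layer2, m.layer3, m.layer4]
        v, t, s = resnet50(), resnet50(), resnet50()
        self.visible = nn.Sequential(*stages(v)[:split_at])  # visible-only params
        self.thermal = nn.Sequential(*stages(t)[:split_at])  # thermal-only params
        self.shared = nn.Sequential(*stages(s)[split_at:])   # shared params

    def forward(self, x_v, x_t):
        return self.shared(self.visible(x_v)), self.shared(self.thermal(x_t))
```

Sweeping `split_at` from 0 (fully shared) to 5 (fully separate) spans the design space that, per the abstract, the paper evaluates experimentally.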

Similar Articles

In Defense of the Triplet Loss for Person Re-Identification

In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this, thanks to the notable publication of the Market-1501 and MARS datasets and several strong deep learning approaches. Unfortunate...
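
This line of work is best known for batch-hard triplet mining: for each anchor in a batch, take the farthest same-identity sample as the positive and the closest different-identity sample as the negative. A minimal sketch follows, assuming a PK sampling scheme where every identity in the batch has at least two images; the function name and margin value are illustrative.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(feats, labels, margin=0.2):
    """Sketch of batch-hard triplet mining within one batch."""
    dist = torch.cdist(feats, feats)                   # (N, N) L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=feats.device)

    # Hardest positive: farthest sample with the same label (excluding self).
    pos = dist.masked_fill(~same | eye, float('-inf')).max(1).values
    # Hardest negative: closest sample with a different label.
    neg = dist.masked_fill(same, float('inf')).min(1).values
    return F.relu(margin + pos - neg).mean()
```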

Deep Representation Learning with Part Loss for Person Re-Identification

Learning discriminative representations for unseen person images is critical for person Re-Identification (ReID). Most current approaches learn deep representations in classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations commonly focus on several body parts discriminative to the training set,...
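
One concrete way to realize part-level supervision of this kind is to split the final feature map into horizontal stripes and attach a separate identity classifier to each; the sketch below is a generic illustration under that assumption, not the paper's exact part loss.

```python
import torch
import torch.nn as nn

class PartClassifierHead(nn.Module):
    """Sketch: one identity classifier per horizontal stripe of a feature map."""
    def __init__(self, channels, num_parts, num_ids):
        super().__init__()
        self.num_parts = num_parts
        self.classifiers = nn.ModuleList(
            nn.Linear(channels, num_ids) for _ in range(num_parts))

    def forward(self, fmap):                          # fmap: (B, C, H, W)
        stripes = fmap.chunk(self.num_parts, dim=2)   # split along height
        # Global-average-pool each stripe, then classify it independently.
        return [clf(s.mean(dim=(2, 3)))
                for clf, s in zip(self.classifiers, stripes)]

# Training would sum a cross-entropy term over every stripe's logits, so each
# body part must be discriminative on its own.
```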

Multi-Channel Pyramid Person Matching Network for Person Re-Identification

In this work, we present a Multi-Channel deep convolutional Pyramid Person Matching Network (MC-PPMN) based on the combination of the semantic-components and the color-texture distributions to address the problem of person re-identification. In particular, we learn separate deep representations for semantic-components and color-texture distributions from two person images and then employ pyramid ...

Deep-Person: Learning Discriminative Deep Features for Person Re-Identification

Recently, many methods of person re-identification (ReID) rely on part-based feature representation to learn a discriminative pedestrian descriptor. However, the spatial context between these parts is ignored by the independent extractors on each separate part. In this paper, we propose to apply Long Short-Term Memory (LSTM) in an end-to-end way to model the pedestrian, seen as a sequence of bo...
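
Reading body parts head-to-foot as a sequence can be sketched as running an LSTM over the rows of a convolutional feature map; this is a generic illustration of the idea, not the Deep-Person architecture itself, and the channel and hidden sizes are assumed.

```python
import torch
import torch.nn as nn

class PartSequenceLSTM(nn.Module):
    """Sketch: treat horizontal stripes of a feature map as a head-to-foot
    sequence and summarize them with an LSTM."""
    def __init__(self, channels=2048, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)

    def forward(self, fmap):                      # fmap: (B, C, H, W)
        seq = fmap.mean(dim=3).permute(0, 2, 1)   # (B, H, C): one step per row
        out, _ = self.lstm(seq)
        return out[:, -1]                         # last hidden state as descriptor
```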

Pyramid Person Matching Network for Person Re-identification

In this work, we present a deep convolutional pyramid person matching network (PPMN) with a specially designed Pyramid Matching Module to address the problem of person re-identification. The architecture takes a pair of RGB images as input, and outputs a similarity value indicating whether the two input images represent the same person or not. Based on deep convolutional neural networks, our appr...
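
The pair-in, similarity-out interface can be sketched as a siamese encoder with a binary verification head; in this illustration a plain feature concatenation stands in for the Pyramid Matching Module, so it only captures the input/output contract, not the paper's matching mechanism.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SiameseMatcher(nn.Module):
    """Sketch: shared encoder + verification head mapping an image pair to a
    same-person similarity score."""
    def __init__(self):
        super().__init__()
        encoder = resnet50()
        encoder.fc = nn.Identity()                # expose 2048-d pooled features
        self.encoder = encoder
        self.head = nn.Sequential(
            nn.Linear(2 * 2048, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, img_a, img_b):
        pair = torch.cat([self.encoder(img_a), self.encoder(img_b)], dim=1)
        return torch.sigmoid(self.head(pair))     # similarity in [0, 1]
```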

Journal

Journal title: IEEE Transactions on Multimedia

Year: 2021

ISSN: 1520-9210, 1941-0077

DOI: https://doi.org/10.1109/tmm.2020.3042080